Generating Adjacency-Constrained Subgoals in Hierarchical Reinforcement Learning

Neural Information Processing Systems

Goal-conditioned hierarchical reinforcement learning (HRL) is a promising approach for scaling up reinforcement learning (RL) techniques. However, it often suffers from training inefficiency because the high-level action space, i.e., the goal space, is often large. Searching in a large goal space poses difficulties for both high-level subgoal generation and low-level policy learning. In this paper, we show that this problem can be effectively alleviated by restricting the high-level action space from the whole goal space to a k-step adjacent region of the current state using an adjacency constraint. We theoretically prove that the proposed adjacency constraint preserves the optimal hierarchical policy in deterministic MDPs, and show that this constraint can be practically implemented by training an adjacency network that can discriminate between adjacent and non-adjacent subgoals. Experimental results on discrete and continuous control tasks show that incorporating the adjacency constraint improves the performance of state-of-the-art HRL approaches in both deterministic and stochastic environments.
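The abstract's core mechanism, shrinking the high-level action space to a k-step adjacent region via a learned adjacency network, can be illustrated with a minimal sketch. Here, `adjacency_score` and `filter_adjacent_subgoals` are hypothetical names, and Euclidean distance on a toy continuous space stands in for the trained neural discriminator the paper actually uses; this is not the authors' implementation.

```python
import math

def adjacency_score(state, subgoal):
    # Hypothetical stand-in for the learned adjacency network: in the
    # paper, a neural discriminator estimates whether a subgoal lies
    # within k environment steps of the current state. Plain Euclidean
    # distance is used here purely for illustration.
    return math.dist(state, subgoal)

def filter_adjacent_subgoals(state, candidates, k):
    """Keep only candidate subgoals judged k-step adjacent to `state`.

    This mimics the adjacency constraint: the high-level policy's
    action space shrinks from the whole goal space to the k-step
    adjacent region of the current state.
    """
    return [g for g in candidates if adjacency_score(state, g) <= k]

state = (0.0, 0.0)
candidates = [(1.0, 0.0), (2.0, 2.0), (5.0, 5.0)]
print(filter_adjacent_subgoals(state, candidates, k=3))
# keeps (1.0, 0.0) and (2.0, 2.0); (5.0, 5.0) is treated as non-adjacent
```

In the actual method, the adjacency network replaces the distance call above and is trained to discriminate adjacent from non-adjacent subgoals from collected trajectories.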


Review for NeurIPS paper: Generating Adjacency-Constrained Subgoals in Hierarchical Reinforcement Learning

Neural Information Processing Systems

Additional Feedback: Overall, the paper is quite well-written and the motivation and idea are simple and interesting. Except for the two main concerns that I will describe below, I am mostly satisfied with the quality of this paper; thus, I am willing to increase my score if the authors can address the following concerns. Both of these ideas have been proposed and used in several previous HRL works; thus, this can be seen as a combination of two existing ideas. These ideas have already been used in [1-4], even though some of these works consider different settings. For example, 1) they predict the "distance" between the current state and the subgoal state (e.g., a UVF with -1 step reward [1, 2], the success rate of a (random) low-level policy [3], or k-step reachability [4]), and 2) they limit subgoal generation to nearby subgoals only (e.g., via thresholding [1-4]).


